

Search for: All records

Creators/Authors contains: "Mamalakis, Antonios"


  1. Abstract

    Climate-driven changes in precipitation amounts and their seasonal variability are expected in many continental-scale regions during the remainder of the 21st century. However, much less is known about future changes in the predictability of seasonal precipitation, an important earth system property relevant for climate adaptation. Here, on the basis of CMIP6 models that capture the present-day teleconnections between seasonal precipitation and previous-season sea surface temperature (SST), we show that climate change is expected to alter the SST-precipitation relationships and thus our ability to predict seasonal precipitation by 2100. Specifically, in the tropics, seasonal precipitation predictability from SSTs is projected to increase throughout the year, except in northern Amazonia during boreal winter. Concurrently, in the extra-tropics, predictability is likely to increase in central Asia during boreal spring and winter. The altered predictability, together with enhanced interannual variability of seasonal precipitation, poses new opportunities and challenges for regional water management.

     
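    The SST-based notion of seasonal predictability used above can be illustrated with a simple lagged-correlation diagnostic. The sketch below is not the paper's analysis; it only shows one common way to quantify how strongly a previous-season SST index covaries with seasonal precipitation, using synthetic arrays in place of CMIP6 output (all variable names and sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for climate-model output: 100 years of a previous-season
# SST index (e.g., ENSO-like) and seasonal precipitation at 500 grid cells.
n_years, n_cells = 100, 500
sst_index = rng.standard_normal(n_years)
precip = 0.6 * sst_index[:, None] + rng.standard_normal((n_years, n_cells))

# Lagged correlation of the previous-season SST index with precipitation at
# every grid cell: a simple proxy for SST-based seasonal predictability.
sst_anom = sst_index - sst_index.mean()
pr_anom = precip - precip.mean(axis=0)
corr = (sst_anom @ pr_anom) / (
    np.sqrt((sst_anom**2).sum()) * np.sqrt((pr_anom**2).sum(axis=0))
)

# Squared correlation (explained variance) as a coarse predictability measure.
print("median explained variance:", round(float(np.median(corr**2)), 3))
```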
  2. Abstract Many of our generation’s most pressing environmental science problems are wicked problems, which means they cannot be cleanly isolated and solved with a single ‘correct’ answer (e.g., Rittel 1973; Wirz 2021). The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) seeks to address such problems by developing synergistic approaches with a team of scientists from three disciplines: environmental science (including atmospheric, ocean, and other physical sciences), AI, and social science, including risk communication. As part of our work, we developed a novel approach to summer school, held June 27–30, 2022. The goal of this summer school was to teach a new generation of environmental scientists how to cross disciplines and develop approaches that integrate all three disciplinary perspectives in order to solve environmental science problems. In addition to a lecture series that focused on the synthesis of AI, environmental science, and risk communication, this year’s summer school included a unique Trust-a-thon component where participants gained hands-on experience applying both risk communication and explainable AI techniques to pre-trained ML models. We had 677 participants from 63 countries register and attend online. Lecture topics included trust and trustworthiness (Day 1), explainability and interpretability (Day 2), data and workflows (Day 3), and uncertainty quantification (Day 4). For the Trust-a-thon we developed challenge problems for three different application domains: (1) severe storms, (2) tropical cyclones, and (3) space weather. Each domain had an associated user persona to guide user-centered development.
  3. Abstract Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture non-linear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and gain insight into their relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, where the ground truth of explanation of the network is known a priori, to help objectively assess their performance. Second, we apply XAI to a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance of zero input) that have previously been overlooked in our field and, if not considered cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help toward a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
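    To make the attribution setting concrete, the following sketch computes a "gradient x input" heatmap, one of the simpler XAI methods of the kind compared in this study. The toy CNN, input size, and PyTorch implementation are assumptions for illustration, not the models or code used in the paper.

```python
import torch
import torch.nn as nn

# Toy CNN standing in for the geoscientific models discussed above
# (architecture and input size are illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

def gradient_x_input(net, x):
    """'Gradient x input' attribution: one of the simplest XAI heatmaps."""
    x = x.clone().requires_grad_(True)
    net(x).sum().backward()
    return (x.grad * x).detach()        # heatmap with the same shape as the input

x = torch.randn(1, 1, 32, 32)           # e.g., one daily climate snapshot
heatmap = gradient_x_input(model, x)
print(heatmap.shape)                    # torch.Size([1, 1, 32, 32])
```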
  4. Abstract Despite the increasingly successful application of neural networks to many problems in the geosciences, their complex and nonlinear structure makes the interpretation of their predictions difficult, which limits model trust and does not allow scientists to gain physical insights about the problem at hand. Many different methods have been introduced in the emerging field of eXplainable Artificial Intelligence (XAI), which aims at attributing the network’s prediction to specific features in the input domain. XAI methods are usually assessed by using benchmark datasets (such as MNIST or ImageNet for image classification). However, an objective, theoretically derived ground truth for the attribution is lacking for most of these datasets, making the assessment of XAI in many cases subjective. Also, benchmark datasets specifically designed for problems in the geosciences are rare. Here, we provide a framework, based on the use of additively separable functions, to generate attribution benchmark datasets for regression problems for which the ground truth of the attribution is known a priori. We generate a large benchmark dataset and train a fully connected network to learn the underlying function that was used for simulation. We then compare estimated heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly. We believe that attribution benchmarks such as the ones introduced herein are of great importance for further application of neural networks in the geosciences, and for more objective assessment and accurate implementation of XAI methods, which will increase model trust and assist in discovering new science.
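    The key property exploited by an additively separable benchmark is that for y = f1(x1) + f2(x2) + ... + fd(xd), the contribution of each input feature to the output is known exactly and can serve as a ground-truth attribution. Below is a minimal synthetic sketch of that construction; the specific functional forms and sizes are illustrative choices, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 10_000, 16

# Additively separable response y = sum_i f_i(x_i), here with quadratic
# f_i(x) = a_i * x + b_i * x**2 (coefficients drawn once and then fixed).
a = rng.uniform(-1.0, 1.0, n_features)
b = rng.uniform(-1.0, 1.0, n_features)

X = rng.standard_normal((n_samples, n_features))
contributions = a * X + b * X**2        # f_i(x_i) for every sample and feature
y = contributions.sum(axis=1)           # regression target for the network

# Because the function is additively separable, 'contributions' is an exact
# per-feature ground-truth attribution for every sample. A network trained on
# (X, y) can be explained with any XAI method and the estimated heatmaps
# compared against this ground truth, e.g. sample by sample via correlation.
print(X.shape, y.shape, contributions.shape)
```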
  5. Abstract

    Methods of explainable artificial intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of neural networks (NNs), highlighting which features in the input contribute the most to a NN prediction. Here, we discuss our “lesson learned” that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution results depend greatly on the considered baseline that the XAI method utilizes—a fact that has been overlooked in the geoscientific literature. The baseline is a reference point to which the prediction is compared so that the prediction can be understood. This baseline can be chosen by the user or is set by construction in the method’s algorithm—often without the user being aware of that choice. We highlight that different baselines can lead to different insights for different science questions and, thus, should be chosen accordingly. To illustrate the impact of the baseline, we use a large ensemble of historical and future climate simulations forced with the shared socioeconomic pathway 3-7.0 (SSP3-7.0) scenario and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions. We conclude by discussing important implications and considerations about the use of baselines in XAI research.

    Significance Statement

    In recent years, methods of explainable artificial intelligence (XAI) have been widely adopted in geoscientific applications, because they can be used to attribute the predictions of neural networks (NNs) to the input and interpret them physically. Here, we highlight that the attributions—and the physical interpretation—depend greatly on the choice of the baseline—a fact that has been overlooked in the geoscientific literature. We illustrate this dependence for a specific climate task, in which a NN is trained to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions.

     
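    Integrated gradients is one widely used XAI method in which the baseline enters explicitly, so it is a convenient way to see the dependence discussed above. The sketch below attributes a toy fully connected regression network with two different baselines and shows that the resulting heatmaps differ; the network, baselines, and sizes are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy fully connected regression network standing in for the temperature-map
# model described above (input length and layer sizes are illustrative).
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 1))
model.eval()

def integrated_gradients(net, x, baseline, steps=64):
    """Integrated gradients: attribution defined relative to an explicit baseline."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)
    net(path).sum().backward()
    avg_grad = path.grad.mean(dim=0)                 # average gradient along the path
    return (x - baseline).squeeze(0) * avg_grad      # sums roughly to f(x) - f(baseline)

x = torch.randn(1, 100)                      # one (flattened) input map
baseline_zero = torch.zeros(1, 100)          # all-zero reference input
baseline_other = 0.5 * torch.ones(1, 100)    # a different, e.g. climatological, reference

attr_zero = integrated_gradients(model, x, baseline_zero)
attr_other = integrated_gradients(model, x, baseline_other)

# The two heatmaps are generally not identical (nor perfectly correlated),
# because they answer different "relative to what?" questions.
print(torch.corrcoef(torch.stack([attr_zero, attr_other]))[0, 1])
```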
  6.
    Abstract Spectral PCA (sPCA), in contrast to classical PCA, offers the advantage of identifying organized spatiotemporal patterns within specific frequency bands and extracting dynamical modes. However, the unavoidable trade-off between frequency resolution and robustness of the PCs leads to high sensitivity to noise and overfitting, which limits the interpretation of the sPCA results. We propose herein a simple nonparametric implementation of sPCA using the continuous analytic Morlet wavelet as a robust estimator of the cross-spectral matrices with good frequency resolution. To improve the interpretability of the results, especially when several modes of similar amplitude exist within the same frequency band, we propose a rotation of the complex-valued eigenvectors to optimize their spatial regularity (smoothness). The developed method, called rotated spectral PCA (rsPCA), is tested on synthetic data simulating propagating waves and shows impressive performance even with high levels of noise in the data. Applied to global historical geopotential height (GPH) and sea surface temperature (SST) daily time series, the method accurately captures patterns of atmospheric Rossby waves at high frequencies (3–60-day periods) in both GPH and SST and El Niño–Southern Oscillation (ENSO) at low frequencies (2–7-yr periodicity) in SST. At high frequencies the rsPCA successfully unmixes the identified waves, revealing spatially coherent patterns with robust propagation dynamics. 
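    A stripped-down sketch of the spectral-PCA building block described above: estimate complex Morlet wavelet coefficients of every grid-point series at a target period, form the Hermitian cross-spectral matrix across grid points, and take its leading eigenvectors as complex spatial modes. The rotation step of rsPCA and any significance testing are omitted, and the data, sizes, and normalization are illustrative assumptions.

```python
import numpy as np

def morlet_coeffs(x, period, dt=1.0, w0=6.0):
    """Complex Morlet wavelet coefficients of a 1-D series at a single period."""
    scale = w0 * period / (2.0 * np.pi)              # scale matching the target period
    t = np.arange(-4.0 * scale, 4.0 * scale + dt, dt)
    psi = np.pi**-0.25 * np.exp(1j * w0 * t / scale - 0.5 * (t / scale) ** 2)
    return np.convolve(x, np.conj(psi[::-1]), mode="same") * np.sqrt(dt / scale)

rng = np.random.default_rng(1)
n_time, n_space = 2000, 50
field = rng.standard_normal((n_time, n_space))       # stand-in for SST/GPH anomalies

# Wavelet coefficients of every grid-point series at a 30-step target period.
W = np.stack([morlet_coeffs(field[:, j], period=30.0) for j in range(n_space)], axis=1)

# Hermitian cross-spectral matrix across grid points and its eigendecomposition;
# the leading complex eigenvector is the spatial pattern (amplitude and phase)
# of the dominant mode in that frequency band.
C = W.conj().T @ W / n_time
eigvals, eigvecs = np.linalg.eigh(C)                 # ascending eigenvalues
leading_mode = eigvecs[:, -1]
print("leading-mode fraction of spectral variance:", eigvals[-1] / eigvals.sum())
```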
  7.
    Abstract Understanding the physical drivers of seasonal hydroclimatic variability and improving predictive skill remains a challenge with important socioeconomic and environmental implications for many regions around the world. Physics-based deterministic models show limited ability to predict precipitation as the lead time increases, due to imperfect representation of physical processes and incomplete knowledge of initial conditions. Similarly, statistical methods drawing upon established climate teleconnections have low prediction skill due to the complex nature of the climate system. Recently, promising data-driven approaches have been proposed, but they often suffer from overparameterization and overfitting due to the short observational record, and they often do not account for spatiotemporal dependencies among covariates (i.e., predictors such as sea surface temperatures). This study addresses these challenges via a predictive model based on a graph-guided regularizer that simultaneously promotes similarity of predictive weights for highly correlated covariates and enforces sparsity in the covariate domain. This approach both decreases the effective dimensionality of the problem and identifies the most predictive features without specifying them a priori. We use large ensemble simulations from a climate model to construct this regularizer, reducing the structural uncertainty in the estimation. We apply the learned model to predict winter precipitation in the southwestern United States using sea surface temperatures over the entire Pacific basin, and demonstrate its superiority over other regularization approaches and statistical models informed by known teleconnections. Our results highlight the potential of optimally combining the space–time structure of predictor variables learned from climate models with new graph-based regularizers to improve seasonal prediction.
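    The general idea of a graph-guided regularizer can be sketched as a least-squares problem with an L1 penalty for sparsity plus a graph-Laplacian penalty that ties together the weights of similar covariates, solved here with a simple proximal-gradient loop. This is a generic illustration under stated assumptions (synthetic data, a correlation-based similarity graph rather than one built from large-ensemble simulations, untuned hyperparameters), not the estimator of the paper.

```python
import numpy as np

def graph_guided_fit(X, y, L, lam_graph=1.0, lam_l1=0.1, lr=1e-3, n_iter=5000):
    """Proximal gradient (ISTA) for
       min_w 0.5*||y - X w||^2 + lam_graph * w^T L w + lam_l1 * ||w||_1,
    where L is a graph Laplacian encoding similarity between covariates."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + 2.0 * lam_graph * (L @ w)       # smooth part
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam_l1, 0.0)  # soft threshold (L1)
    return w

rng = np.random.default_rng(3)
n, p = 80, 200                           # short record, many SST-like covariates
X = rng.standard_normal((n, p))
y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n)

# Similarity graph among covariates; here it comes from the sample correlation
# matrix, whereas the study builds it from large-ensemble model simulations.
A = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

w_hat = graph_guided_fit(X, y, L)
print("number of retained covariates:", int(np.sum(np.abs(w_hat) > 1e-3)))
```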
  8. Abstract

    Precipitation prediction at seasonal timescales is important for planning and management of water resources as well as preparedness for hazards such as floods, droughts and wildfires. Quantifying predictability is quite challenging as a consequence of a large number of potential drivers, varying antecedent conditions, and the small sample size of high‐quality observations available at seasonal timescales, which, in turn, increase prediction uncertainty and the risk of model overfitting. Here, we introduce a generalized probabilistic framework to account for these issues and assess predictability under uncertainty. We focus on prediction of winter (Nov–Mar) precipitation across the contiguous United States, using sea surface temperature‐derived indices (averaged in Aug–Oct) as predictors. In our analysis we identify “predictability hotspots,” which we define as regions where precipitation is inherently more predictable. Our framework estimates the entire predictive distribution of precipitation using copulas and quantifies prediction uncertainties, while employing principal component analysis for dimensionality reduction and a cross-validation technique to avoid overfitting. We also evaluate how predictability changes across different quantiles of the precipitation distribution (dry, normal, wet amounts) using a multi‐category 3 × 3 contingency table. Our results indicate that well-defined predictability hotspots occur in the Southwest and Southeast. Moreover, extreme dry and wet conditions are shown to be relatively more predictable compared to normal conditions. Our study may help with water resources management in several subregions of the United States and can be used to assess the fidelity of earth system models in successfully representing teleconnections and predictability.

     
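    One way to make a copula-based predictive distribution concrete is with a bivariate Gaussian copula and empirical margins: transform the SST index and precipitation to normal scores by rank, use the conditional normal distribution given a new index value, and map conditional quantiles back to precipitation units. The sketch below (synthetic data, a Gaussian copula chosen for simplicity rather than the paper's specific copula family, and an arbitrary new index value of 0.9) also shows the tercile categorization underlying a 3 x 3 contingency-table evaluation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic record: an Aug-Oct SST index and Nov-Mar precipitation totals.
n = 70
sst = rng.standard_normal(n)
precip = rng.gamma(shape=2.0, scale=1.0, size=n) + 0.8 * sst

def normal_scores(x):
    """Rank-based probability integral transform to standard-normal scores."""
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

# 1) Gaussian copula: correlation of the normal scores of the two margins.
z_sst, z_pr = normal_scores(sst), normal_scores(precip)
rho = np.corrcoef(z_sst, z_pr)[0, 1]

# 2) Conditional predictive distribution in normal-score space for a new
#    predictor value, mapped back to precipitation via the empirical margin.
sst_new = 0.9                                        # hypothetical Aug-Oct index value
u_new = (np.sum(sst <= sst_new) + 0.5) / (n + 1.0)   # empirical CDF of the new value
z_new = stats.norm.ppf(u_new)
cond = stats.norm(loc=rho * z_new, scale=np.sqrt(1.0 - rho**2))
pred = np.quantile(precip, stats.norm.cdf(cond.ppf([0.1, 0.5, 0.9])))
print("10/50/90% predictive quantiles:", np.round(pred, 2))

# 3) Tercile category (dry / normal / wet) of the median forecast: the kind of
#    discretization behind a 3x3 contingency-table evaluation.
terciles = np.quantile(precip, [1 / 3, 2 / 3])
category = int(np.searchsorted(terciles, pred[1]))
print("forecast category:", ["dry", "normal", "wet"][category])
```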